
    A Revisit to Quadratic Programming with One Inequality Quadratic Constraint via Matrix Pencil

    Quadratic programming with one inequality quadratic constraint (QP1QC) is a very special case of quadratically constrained quadratic programming (QCQP) and has attracted much attention since the early 1990s. It is now understood that, under the primal Slater condition, (QP1QC) has a tight SDP relaxation (PSDP). The optimal solution to (QP1QC), if it exists, can be obtained by a matrix rank-one decomposition of the optimal matrix X* of (PSDP). In this paper, we revisit (QP1QC) by analyzing the matrix pencil associated with two symmetric real matrices A and B, where A defines the quadratic term of the objective function and B that of the constraint. We focus on the "undesired" (QP1QC) problems which are often ignored in the literature: either there exists no Slater point, or (QP1QC) is unbounded below, or (QP1QC) is bounded below but unattainable. Our analysis is conducted with the help of the matrix pencil, not only for checking whether the undesired cases happen, but also as an alternative way to compute the optimal solution, in comparison with the usual SDP/rank-one-decomposition procedure.
    Comment: 22 pages, 0 figures
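    To make the standard route concrete, below is a minimal sketch of the SDP relaxation plus rank-one recovery described above, written with numpy and cvxpy on a hypothetical well-behaved instance (a unit-ball constraint, so a Slater point exists). The instance data, the helper hom, and the use of cvxpy are illustrative assumptions, not the paper's own code; the paper's contribution, the matrix-pencil analysis of the undesired cases, is not shown here.

```python
import numpy as np
import cvxpy as cp

# (QP1QC): minimize x^T A x + 2 a^T x  subject to  x^T B x + 2 b^T x + c <= 0.
# Hypothetical instance data, chosen so that a Slater point exists.
n = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2 + n * np.eye(n)          # symmetric objective matrix
a = rng.standard_normal(n)
B, b, c = np.eye(n), np.zeros(n), -1.0     # unit ball: x^T x - 1 <= 0

def hom(Q, q, r):
    # Homogenize (Q, q, r) into the (n+1) x (n+1) block matrix [[Q, q], [q^T, r]].
    return np.block([[Q, q[:, None]], [q[None, :], np.array([[r]])]])

# (PSDP): relax the rank-one matrix [x; 1][x; 1]^T to X >= 0 with X[n, n] = 1.
X = cp.Variable((n + 1, n + 1), symmetric=True)
prob = cp.Problem(
    cp.Minimize(cp.trace(hom(A, a, 0.0) @ X)),
    [X >> 0, X[n, n] == 1, cp.trace(hom(B, b, c) @ X) <= 0],
)
prob.solve()

# If the optimal X is (numerically) rank one, its top eigenpair recovers
# a solution of (QP1QC) after dehomogenization.
w, V = np.linalg.eigh(X.value)
v = V[:, -1] * np.sqrt(max(w[-1], 0.0))
x_hat = v[:n] / v[n]
print("candidate solution:", x_hat)
```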

    Iterative Regularization for Learning with Convex Loss Functions

    We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, iterative regularization imposes no constraint or penalization; generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove finite-sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and gives insights into the interplay between statistics and optimization in machine learning.
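    As a concrete illustration of the idea, the sketch below runs kernel subgradient descent on the hinge loss and picks the stopping iteration with a hold-out set, so the number of iterations plays the role of the regularization parameter. The Gaussian kernel, the decaying step size, the hold-out stopping rule, and all data are illustrative assumptions; the paper's finite-sample analysis relies on its own step-size and stopping-time choices.

```python
import numpy as np

# Synthetic binary classification data (hypothetical).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(200))

def gaussian_kernel(U, V, gamma=0.5):
    # Pairwise Gaussian (RBF) kernel between rows of U and rows of V.
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

n_tr = 150                                   # train / hold-out split
K = gaussian_kernel(X[:n_tr], X[:n_tr])
K_val = gaussian_kernel(X[n_tr:], X[:n_tr])

alpha = np.zeros(n_tr)                       # f_t = sum_i alpha_i K(x_i, .)
best_err, best_alpha, best_t = np.inf, alpha.copy(), 0
for t in range(1, 501):
    margins = y[:n_tr] * (K @ alpha)
    # Subgradient of the empirical hinge-loss risk at the current iterate.
    g = np.where(margins < 1, -y[:n_tr], 0.0)
    alpha -= (1.0 / np.sqrt(t)) * g / n_tr   # decaying step size
    val_err = np.mean(np.sign(K_val @ alpha) != y[n_tr:])
    if val_err < best_err:
        best_err, best_alpha, best_t = val_err, alpha.copy(), t
# best_t is the early-stopping time: no penalty term appears anywhere,
# yet stopping early regularizes the solution.
```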

    Xuan Lin, Composition; Student Recital


    The Necessary And Sufficient Condition for Generalized Demixing

    Demixing is the problem of identifying multiple structured signals from a superimposed observation. This work analyzes a general framework, based on convex optimization, for solving demixing problems. We present a new criterion for determining whether or not a specific convex optimization problem built for generalized demixing succeeds. This criterion also makes it possible to estimate the probability of success via the approximate kinematic formula.
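    For intuition, the sketch below poses a toy convex demixing program with numpy and cvxpy: two sparse signals are superimposed after one is rotated by a random orthogonal matrix Q, and both are recovered by minimizing the sum of their l1 norms. The instance, the rotation, and the l1-plus-l1 formulation are illustrative assumptions, not the paper's setup; whether such a program succeeds, as a function of sparsity and incoherence, is exactly the kind of question the approximate kinematic formula answers.

```python
import numpy as np
import cvxpy as cp

# Hypothetical instance: z = x0 + Q @ y0, with x0 sparse and y0 sparse
# in the rotated basis Q.
rng = np.random.default_rng(1)
n, k = 100, 5
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y0 = np.zeros(n)
y0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random rotation (incoherence)
z = x0 + Q @ y0                                   # superimposed observation

# Convex demixing: split z back into a sparse part and a Q-sparse part.
x = cp.Variable(n)
y = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(y)), [x + Q @ y == z])
prob.solve()
print("recovered x0:", np.allclose(x.value, x0, atol=1e-4))
print("recovered y0:", np.allclose(y.value, y0, atol=1e-4))
```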